Search Results: "fog"

3 May 2014

Andrea Veri: Adding reCAPTCHA support to Mailman

The GNOME infrastructure, like many others, has recently been hit by a huge amount of subscription-based spam against its Mailman instances. What the attackers were doing was simply launching a GET call against a specific REST API URL, passing all the parameters needed for a subscription request (and confirmation) to be sent out. It becomes easy to understand when you look at the following example taken from our apache.log:
May 3 04:14:38 restaurant apache: 81.17.17.90, 127.0.0.1 - - [03/May/2014:04:14:38 +0000] "GET /mailman/subscribe/banshee-list?email=example@me.com&fullname=&pw=123456789&pw-conf=123456789&language=en&digest=0&email-button=Subscribe HTTP/1.1" 403 313 "http://spam/index2.html" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36"
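To make the mechanics concrete, the attack boils down to replaying a request like the one below. This is a sketch using the Python requests library; the host name is a placeholder and the parameters are lifted from the log line above:
# Reconstruction of the automated subscription request, for illustration only.
# Rotating source subnets and User-Agent strings is cheap for the sender,
# which is why banning on either never worked for long.
import requests

params = {
    "email": "example@me.com",
    "fullname": "",
    "pw": "123456789",
    "pw-conf": "123456789",
    "language": "en",
    "digest": "0",
    "email-button": "Subscribe",
}
requests.get("http://lists.example.org/mailman/subscribe/banshee-list", params=params)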
As the log line shows, the attackers were sending all the relevant details needed for the subscription to go forward: specifically the full name, the email address, the digest option and the password for the target list. At first we tried to stop the spam by banning the subnets the requests were coming from; then, when it became obvious that more subnets were being used and manual intervention was needed, we tried banning their User-Agents. Again no luck: the spammers were smart enough to change the User-Agent every now and then, making it match an existing browser's (so that blocking on it would have produced a lot of false positives).

Now you might be wondering why such an attack caused a lot of issues and pain. Well, the attackers made use of addresses found around the web for their malicious subscription requests. That means we received a lot of emails from people who had never heard of the GNOME mailing lists but had received around 10k subscription requests seemingly sent by themselves. It was obvious we needed to look for another solution, and luckily someone on our support channel suggested that the freedesktop.org sysadmins had recently added CAPTCHA support to Mailman. I'm now sharing the patch and providing a few more details on how to properly set it up on either DEB- or RPM-based distributions. Credit for the patch goes to Debian Developer Tollef Fog Heen, who has been so kind as to share it with us.

Before patching your installation, make sure to install the python-recaptcha package on DEB-based distributions (tested on Debian with Mailman 2.1.15) or python-recaptcha-client on RPM-based distributions (I personally tested it against Mailman 2.1.15 on RHEL 6).

The Patch
diff --git a/Mailman/Cgi/listinfo.py b/Mailman/Cgi/listinfo.py
index 4a54517..d6417ca 100644
--- a/Mailman/Cgi/listinfo.py
+++ b/Mailman/Cgi/listinfo.py
@@ -22,6 +22,7 @@
 
 import os
 import cgi
+import sys
 
 from Mailman import mm_cfg
 from Mailman import Utils
@@ -30,6 +31,8 @@ from Mailman import Errors
 from Mailman import i18n
 from Mailman.htmlformat import *
 from Mailman.Logging.Syslog import syslog
+sys.path.append("/usr/share/pyshared")
+from recaptcha.client import captcha
 
 # Set up i18n
 _ = i18n._
@@ -200,6 +203,9 @@ def list_listinfo(mlist, lang):
     replacements['<mm-lang-form-start>'] = mlist.FormatFormStart('listinfo')
     replacements['<mm-fullname-box>'] = mlist.FormatBox('fullname', size=30)
 
+    # Captcha
+    replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=False)
+
     # Do the expansion.
     doc.AddItem(mlist.ParseTags('listinfo.html', replacements, lang))
     print doc.Format()
diff --git a/Mailman/Cgi/subscribe.py b/Mailman/Cgi/subscribe.py
index 7b0b0e4..c1c7b8c 100644
--- a/Mailman/Cgi/subscribe.py
+++ b/Mailman/Cgi/subscribe.py
@@ -21,6 +21,8 @@ import sys
 import os
 import cgi
 import signal
+sys.path.append("/usr/share/pyshared")
+from recaptcha.client import captcha
 
 from Mailman import mm_cfg
 from Mailman import Utils
@@ -132,6 +130,17 @@ def process_form(mlist, doc, cgidata, lang):
     remote = os.environ.get('REMOTE_HOST',
                             os.environ.get('REMOTE_ADDR',
                                            'unidentified origin'))
+
+    # recaptcha
+    captcha_response = captcha.submit(
+        cgidata.getvalue('recaptcha_challenge_field', ""),
+        cgidata.getvalue('recaptcha_response_field', ""),
+        mm_cfg.RECAPTCHA_PRIVATE_KEY,
+        remote,
+        )
+    if not captcha_response.is_valid:
+        results.append(_('Invalid captcha'))
+
     # Was an attempt made to subscribe the list to itself?
     if email == mlist.GetListEmail():
         syslog('mischief', 'Attempt to self subscribe %s: %s', email, remote)
Additional setup

Then, in the /var/lib/mailman/templates/en/listinfo.html template (right below <mm-digest-question-end>), add:
      <tr>
        <td>Please fill out the following captcha</td>
    <td><mm-recaptcha-javascript></td>
      </tr>
Also make sure to generate a public and private key at https://www.google.com/recaptcha and add the matching parameters to your mm_cfg.py file. Given the names used by the patch, that means something like the following (key values are placeholders):
RECAPTCHA_PUBLIC_KEY = '<your public key>'
RECAPTCHA_PRIVATE_KEY = '<your private key>'
Loading reCAPTCHA images from a trusted HTTPS source can be done by changing the following line:
replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=False)
to
replacements['<mm-recaptcha-javascript>'] = captcha.displayhtml(mm_cfg.RECAPTCHA_PUBLIC_KEY, use_ssl=True)
EPEL 6 related details

A few additional details should be provided in case you are setting this up on a RHEL 6 host (or any other machine using the EPEL 6 package python-recaptcha-client-1.0.5-3.1.el6). Importing the recaptcha.client module will fail for some strange reason; importing it correctly can be done this way:
ln -s /usr/lib/python2.6/site-packages/recaptcha/client /usr/lib/mailman/pythonlib/recaptcha
and then fix the imports, also making sure the sys.path.append("/usr/share/pyshared") line is not there:
from recaptcha import captcha
That's not all: the package still won't work as expected, given that the API_SSL_SERVER, API_SERVER and VERIFY_SERVER variables in captcha.py are outdated (filed as bug #1093855). Substitute them with the following ones:
API_SSL_SERVER="https://www.google.com/recaptcha/api"
API_SERVER="http://www.google.com/recaptcha/api"
VERIFY_SERVER="www.google.com"
And then on line 76:
url = "https://%s/recaptcha/api/verify" % VERIFY_SERVER,
That should be all! Enjoy!
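As a quick sanity check that the keys and the client library are wired up correctly, you can render the widget markup from a Python shell. A minimal sketch; the key value is a placeholder:
from recaptcha.client import captcha

# If this prints the <script>/<noscript> widget markup, displayhtml() works
# and the client library found your public key.
print captcha.displayhtml('<your public key>', use_ssl=True)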

10 February 2014

Russell Coker: Fingerprints and Authentication

Dustin Kirkland wrote an interesting post about fingerprint authentication [1]. He suggests using fingerprints for identifying users (NOT for authentication) and gives an example of a married couple sharing a tablet and using fingerprints to determine whose apps are loaded. In response, Tollef Fog Heen suggests using fingerprints for lightweight authentication, such as resuming a session after a toilet break [2]. I think that one of the best comments on the issue of authentication for different tasks is in XKCD comic 1200 [3]. It seems obvious that the division between administrator (who installs new device drivers etc.) and user (who does everything from playing games to online banking with the same privileges) isn't working, and never could work well, particularly when the user in question installs their own software.

I think one thing which is worth considering is the uses of a signature. A signature can be easily forged in many ways and they often aren't checked well. There seem to be two broad cases of using a signature: one is entering into a legally binding serious contract such as a mortgage (where wanting to sign is the relevant issue), and the other is cases where the issue doesn't matter so much (e.g. signing off on a credit card purchase, where the parties at risk can afford to lose money occasionally in exchange for efficient transactions). Signing is relatively easy, but that's because it either doesn't matter much or because it's just a legal issue which isn't connected to authentication. The possibility of serious damage (sending life savings or incriminating pictures to criminals in another jurisdiction) being done instantly never applied to signatures. It seems to me that in many ways signatures are comparable to fingerprints, and neither is particularly good for authentication to a computer.

In regard to Tollef's ideas about lightweight authentication, I think the first thing that would be required is direct user control over the authentication needed to unlock a system. I have read about some Microsoft research into a computer monitoring the office environment to better facilitate the user's requests; an obvious extension to such research would be to have greater unlock requirements if there are more unknown people in the area or if the device is in a known unsafe location. But apart from that sort of future development, it seems that having the user request a greater or lesser authentication check, either at the time they lock their session or by policy, would make sense. Users generally have a reasonable idea of the risk of someone else trying to log in at their terminal, so they should be able to decide that a toilet break at home only requires a fingerprint (enough to keep out other family members) while a toilet break at the office requires greater authentication. Mobile devices could use GPS location to determine unlock requirements; GPS can be forged, but if your attacker is willing and able to do that then you face a greater risk than most users.

Some users turn off authentication on their phone because it's too inconvenient. If they had the option of using a fingerprint most of the time and a password for the times when a fingerprint can't be read, it would give an overall increase in security. Finally, it should be possible to unlock only certain applications. Recent versions of Android support widgets on the lock screen, so you can perform basic tasks such as checking the weather forecast without unlocking your phone.
But it should also be possible to have different authentication requirements for different applications. Using a fingerprint scan to allow playing games or reading email in the mailing-list folder would be more than adequate security, but reading important email and using SMS probably needs greater authentication. This takes us back to the XKCD cartoon.
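A toy sketch of that per-application idea, with hypothetical application names and just two authentication levels:
# Hypothetical per-application unlock policy: weak auth (fingerprint) for
# low-risk applications, strong auth (password) for the risky ones.
APP_AUTH_LEVEL = {
    "games": "fingerprint",
    "mailing-list-folder": "fingerprint",
    "important-email": "password",
    "sms": "password",
}

def unlock_requirement(app):
    # Unknown applications default to the strongest requirement.
    return APP_AUTH_LEVEL.get(app, "password")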

29 November 2013

Tollef Fog Heen: Redirect loop with interaktiv.nsb.no (and how to fix it)

I'm running a local unbound instance on my laptop to get working DNSSEC. It turns out that this doesn't work too well with the captive portal of NSB (the Norwegian national rail company): you get into an endless series of redirects. Changing resolv.conf so you use the DHCP-provided resolver stops the redirect loop and you can then log in. Afterwards, you're free to switch back to using your own local resolver.
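A minimal sketch of that switch, assuming a plain resolv.conf that is not managed by resolvconf or NetworkManager; run it as root with the DHCP-provided resolver address to log in, then with no argument to go back to the local unbound:
import sys

def set_nameserver(addr):
    # Overwrite resolv.conf with a single nameserver line.
    with open("/etc/resolv.conf", "w") as f:
        f.write("nameserver %s\n" % addr)

if __name__ == "__main__":
    set_nameserver(sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1")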

3 October 2013

Tollef Fog Heen: Fingerprints as lightweight authentication

Dustin Kirkland recently wrote that "Fingerprints are usernames, not passwords". I don't really agree, I think fingerprints are fine for lightweight authentication. iOS at least allows you to only require a pass code after a time period has expired, so you don't have to authenticate to the phone all the time. Replacing no authentication with weak authentication (but only for a fairly short period) will improve security over the current status, even if it's not perfect. Having something similar for Linux would also be reasonable, I think. Allow authentication with a fingerprint if I've only been gone for lunch (or maybe just for a trip to the loo), but require password or token if I've been gone for longer. There's a balance to be struck between convenience and security.
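As a toy illustration of that balance, with an entirely assumed 30-minute cut-off:
import time

FINGERPRINT_WINDOW = 30 * 60  # seconds; purely a policy choice

def required_auth(locked_at, now=None):
    # Short absences can be unlocked with a fingerprint; longer ones
    # require the password (or token).
    if now is None:
        now = time.time()
    return "fingerprint" if now - locked_at < FINGERPRINT_WINDOW else "password"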

27 September 2013

Michal Čihař: Spring in Bohemian-Moravian Highlands

After quite some delay, I got to selecting some photos for my gallery. In spring, we spent a few days in the Bohemian-Moravian Highlands with Pentax Friends. The program of course included taking pictures and drinking beer or wine :-). You can find some small but nice waterfalls there, and in the villages there live wild animals. As the weather was not really nice, we had a nice opportunity to shoot some pictures in fog. And last but not least, we spent almost a whole day taking pictures in the Pilgrimage Church of Saint John of Nepomuk. The trip was really nice, though we could have used better weather, as on most mornings there were really no nice conditions for taking pictures.


5 September 2013

Vincent Sanders: Strive for continuous improvement, instead of perfection.


Kim Collins was perhaps thinking more about physical improvement but his advice holds well for software.

A lot has been written about the problems around software engineers wanting to rewrite a codebase because of "legacy" issues. Experience has taught me that refactoring is generally a better solution than rewriting, because you always have something that works and can be released if necessary.

Although that observation applies to the whole of a project, sometimes the technical debt in component modules means they need reimplementing. Within NetSurf we have historically had problems when such a change was done because of the large number of supported platforms and configurations.
History

A year ago I implemented a Continuous Integration (CI) solution for NetSurf which, combined with our switch to Git for revision control, has transformed our process, making several important refactors and rewrites possible while keeping us confident about overall project stability.

I know it has been a year because the VPS hosting bill from Mythic turned up and we are still a very happy customer. We have taken the opportunity to extend the infrastructure to add additional build systems, which is still within the NetSurf project's means.

Over the last twelve months the CI system has attempted over 100,000 builds, including the project's libraries and browser. Each commit causes an attempt to build for eight platforms, in multiple configurations, with multiple compilers. Because of this, the response time to a commit is dependent on the slowest build slave (the Mac mini OS X Leopard system).

Currently this means a browser build, not including the support libraries, completes in around 450 seconds. The eleven support libraries range from 30 to 330 seconds each. This gives a reasonable response time for most operations. The worst case involves changing the core buildsystem which causes everything to be rebuilt from scratch taking some 40 minutes.

The CI system has gained capability since it was first set up, and a number of jobs beyond the plain builds have been added.

Downsides

It has not all been positive though: the administration required to keep the builds running has been more than expected, and it has highlighted just how costly supporting all our platforms is. When I say costly I do not just refer to the physical requirements of providing build slaves, but more importantly the time required.

Some examples include:
  • Procuring the hardware, installing the operating system and configuring the build environment for the OS X slaves
  • Getting the toolchain built and installed for cross compilation
  • Dealing with software upgrades and updates on the systems
  • Solving version issues with interacting parts; especially limiting is the lack of Java 1.6 on PPC OS X, preventing Jenkins updates
This administration is not interesting to me and consumes time which could otherwise be spent improving the browser, though the benefits of having the system are considered by the development team to outweigh the disadvantages.

The highlighting of the costs of supporting so many platforms has led us to re-evaluate their future viability. Certainly the PPC OS X port is in the gravest danger of being imminently dropped; when the build slave's drive failed it was only saved because there turned out to be actual users.

There is also the question of the BeOS platform, which we are currently unable to build with the CI system at all, as it cannot be targeted for cross compilation and cannot run a sufficiently complete Java implementation to run a Jenkins slave.

An unexpected side effect of publishing every CI build has been that many non-developer users are directly downloading and using these builds. In some cases we get messages to the mailing list about a specific build while the rest of the job is still ongoing.

Despite the prominent warning on the download area and clear explanations on the mailing lists, we still get complaints and opinions about what we should be "allowing" in terms of stability and features in these builds. For anyone else considering allowing general access to CI builds, I would recommend a very clear statement of intent and having a policy prepared for when users ignore the statement.
Tools
Using Jenkins has also been a learning experience. It is generally great, but there are some issues I have which, while not insurmountable, are troubling:
Configuration and history cannot easily be stored in a revision control system.
This means our system has to be restored from a backup in case of failure and I cannot simply redeploy it from scratch.

Job filtering, especially for matrix jobs with many combinations, is unnecessarily complicated.
This requires the use of a single-line text "combination filter", which is a Java expression limiting which combinations are built. An interface allowing the user to graphically select from a grid, similar to the output tables showing success, would be preferable. Such a tool could even generate the textual combination filter if that's easier.

This is especially problematic for the main browser job, which has options for label (platform that can compile the build), javascript enablement, compiler and frontend (the windowing toolkit, if you prefer; e.g. the linux label can build both gtk and framebuffer). The filter for this job is several kilobytes of text which, due to the first issue, has to be cut and pasted by hand.
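For what it's worth, generating such a filter mechanically is straightforward; a sketch using two of the axis names from the job described above, with a made-up allow-list of combinations:
# Build a Jenkins combination filter expression from an allow-list of
# (label, frontend) pairs. The pairs here are purely illustrative.
allowed = [("linux", "gtk"), ("linux", "framebuffer"), ("amd64", "gtk")]
clauses = ['(label=="%s" && frontend=="%s")' % pair for pair in allowed]
print " || ".join(clauses)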

Handling of simple makefile based projects is rudimentary.
This has been worked around mainly by creating shell scripts to perform the builds. These scripts are checked into the repositories so they are readily modified. Initially we had the text in each job but that quickly became unmanageable.

Output parsing is limited.
Fortunately several plugins are available which mitigate this issue but I cannot help feeling that they ought to be integrated by default.

Website output is not readily modifiable.
Instead of providing, say, a default CSS file that all generated content uses for styling, someone with knowledge of Java must write a plugin to change any part of the look and feel of the Jenkins tool. I understand this helps all Jenkins instances look like the same program, but it means integrating Jenkins into the rest of our project's web site is not straightforward.
Conclusion

In conclusion, I think the CI system is an invaluable tool for almost any non-trivial software project, but the implementation costs with current tools should not be underestimated.

27 June 2013

Tollef Fog Heen: Getting rid of NSCA using Python and Chef

NSCA is a tool used to submit passive check results to nagios. Unfortunately, an incompatibility was recently introduced between wheezy clients and old servers. Since I don't want to upgrade my server, this caused some problems and I decided to just get rid of NSCA completely. The server side of NSCA is pretty trivial, it basically just adds a timestamp and a command name to the data sent by the client, then changes tabs into semicolons and stuffs all of that down Nagios' command pipe. The script I came up with was:
#! /usr/bin/python
# -*- coding: utf-8 -*-
import time
import sys
# format is:
# [TIMESTAMP] COMMAND_NAME;argument1;argument2;...;argumentN
#
# For passive checks, we want PROCESS_SERVICE_CHECK_RESULT with the
# format:
#
# PROCESS_SERVICE_CHECK_RESULT;<host_name>;<service_description>;<return_code>;<plugin_output>
#
# return code is 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
#
# Read lines from stdin with the format:
# $HOSTNAME\t$SERVICE_NAME\t$RETURN_CODE\t$TEXT_OUTPUT
if len(sys.argv) != 2:
    print "Usage:  0  HOSTNAME".format(sys.argv[0])
    sys.exit(1)
HOSTNAME = sys.argv[1]
timestamp = int(time.time())
nagios_cmd = file("/var/lib/nagios3/rw/nagios.cmd", "w")
for line in sys.stdin:
    (_, service, return_code, text) = line.split("\t", 3)
    nagios_cmd.write(u"[ timestamp ] PROCESS_SERVICE_CHECK_RESULT; hostname ; service ; return_code ; text \n".format
                     (timestamp = timestamp,
                      hostname = HOSTNAME,
                      service = service,
                      return_code = return_code,
                      text = text))
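For example, fed an input line like "web1<TAB>Disk Space<TAB>0<TAB>DISK OK" (host and service names made up here), the script writes something of this shape to Nagios' command pipe:
[1372334400] PROCESS_SERVICE_CHECK_RESULT;web1;Disk Space;0;DISK OK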
The reason for the hostname in the line (even though it's overridden) is to be compatible with send_nsca's input format. Machines submit check results over SSH using its excellent ForceCommand capability; the Chef template for the authorized_keys file looks like:
<% for host in @nodes %>
command="/usr/local/lib/nagios/nagios-passive-check-result <%= host[:hostname] %>",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa <%= host[:keys][:ssh][:host_rsa_public] %> <%= host[:hostname] %>
<% end %>
The actual chef recipe looks like:
nodes = []
search(:node, "*:*") do  n 
  # Ignore not-yet-configured nodes                                                                       
  next unless n[:hostname]
  next unless n[:nagios]
  next if n[:nagios].has_key?(:ignore)
  nodes << n
end
nodes.sort! { |a, b| a[:hostname] <=> b[:hostname] }
print nodes
template "/etc/ssh/userkeys/nagios" do
  source "authorized_keys.erb"
  mode 0400
  variables({
              :nodes => nodes
            })
end
cookbook_file "/usr/local/lib/nagios/nagios-passive-check-result" do
  mode 0555
end
user "nagios" do
  action :manage
  shell "/bin/sh"
end
To submit a check, hosts do:
printf "$HOSTNAME\t$SERVICE_NAME\t$RET\t$TEXT\n"   ssh -i /etc/ssh/ssh_host_rsa_key -o BatchMode=yes -o StrictHostKeyChecking=no -T nagios@$NAGIOS_SERVER

18 June 2013

Tollef Fog Heen: An otter, please (or, a better notification system)

Recently, there have been discussions on IRC and the debian-devel mailing list about how to notify users, typically from a cron script or a system daemon needing to tell the user their hard drive is about to expire. The current way is generally "send email to root" and for some bits "pop up a notification bubble, hoping the user will see it". Emailing me means I get far too many notifications. They're often not actionable (apt-get update failed two days ago) and they're not aggregated.

I think we need a system that at its core has level and edge triggers and some way of doing flap detection. Level interrupts mean "tell me if a disk is full right now". Edge means "tell me if the checksums have changed, even if they now look ok". Flap detection means "tell me if the nightly apt-get update fails more often than once a week". It would be useful if it could extrapolate some notifications too, so it could tell me "your disk is going to be full in $period unless you add more space".

The system needs to be able to take input in a variety of formats: syslog, unstructured output from cron scripts (including their exit codes), SNMP, nagios notifications, sockets and fifos and so on. Based on those inputs and any correlations it can pull out of them, it should try to reason about what's happening on the system. If the conclusion there is "something is broken", it should see if it's something that it can reasonably fix by itself. If so, fix it and record it (so it can be used for notification if appropriate: I want to be told if you restart apache every two minutes). If it can't fix it, notify the admin. It should also group similar messages so a single important message doesn't drown in a million unimportant ones. Ideally, this should be cross-host aggregation. It should be possible to escalate notifications if they're not handled within some time period.

I'm not aware of such a tool. Maybe one could be rigged together by careful application of logstash, nagios, munin/ganglia/something and sentry. If anybody knows of such a tool, let me know, or if you're working on one, also please let me know.
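The three trigger types are easy to state in code; a toy sketch with made-up window and threshold values:
# Level: fire while the condition holds right now.
# Edge: fire only when the observed value changes.
# Flap detection: fire when failures exceed a rate over a sliding window.
class Trigger(object):
    def __init__(self, flap_window=7, flap_limit=1):
        self.last = None
        self.history = []               # recent failure flags, newest last
        self.flap_window = flap_window  # observations to look back over
        self.flap_limit = flap_limit    # failures tolerated inside the window

    def level(self, bad):
        return bad

    def edge(self, value):
        fired = self.last is not None and value != self.last
        self.last = value
        return fired

    def flapping(self, failed):
        self.history = (self.history + [bool(failed)])[-self.flap_window:]
        return sum(self.history) > self.flap_limit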

30 April 2013

Gunnar Wolf: Activities facing the next round of Trans-Pacific Partnership negotiations ( #yaratpp #tpp #internetesnuestra )

Excuse me for the rush and lack of organization... but this kind of thing doesn't always allow for proper planning. So, please bear with my chaos ;-)

What is the Trans-Pacific Partnership? Yet another secretly negotiated international agreement that, among many chapters, aims at pushing a free-market based economy, as defined by a very select few. Most important to me, and to many of my readers: it includes important chapters on intellectual property and online rights. Hundreds of thousands of us around the world took part in different ways in the (online and "meat-space") demonstrations against the SOPA/PIPA laws back in February 2012. We knew back then that a similar project would attempt to bite us back: well, here it is. Only this time it's not only covering copyright, patents, trademark, reverse engineering, etc. TPP is basically a large-scale free trade agreement on steroids. The issue that we care about now is just one of its aspects. Thus, it's far less probable that we can get a full stop for TPP as we got for SOPA, but we have to get it on the minds of as many people as possible! Learn more with this infographic distributed by the EFF.

Which countries? The countries currently part of TPP are Chile, Peru, New Zealand, Australia, Malaysia, Brunei, Singapore, Vietnam... and, of course, the USA. Mexico, Canada and Japan are in the process of joining the partnership. A group of Mexican senators are travelling to Lima to take part in this round. (Image by Colin Beardon, It's Our Future, NZ.)

What are we doing about it? As much as possible! I tried to tune in with Peru's much more organized call. The next round of negotiations will be in Lima, Peru, between May 14 and 24. Their activities are wildly more organized than ours: they are planning a weekend-long camping for Internet freedom, with 28 hours' worth of activities. As for us, our activities will be far more limited, but I still hope to have an interesting session (poster design by Gacela, thanks!). This Friday we will be at the Aula Magna, Facultad de Ingeniería, UNAM, México DF, from 10 AM until 3 PM. We do not have a clear speakers program, as the organization was quite rushed. I have invited several people who I know will be interesting to hear, and I expect a good part of the discussion to be a round table. I expect we will:
  1. Introduce people working on different corners of this topic
  2. Explain in some more detail what TPP is about
  3. Come up with actions we can take to influence Mexico's joining of TPP
  4. And, as this will be at Facultad de Ingeniería, another explicit goal of this session will be, of course, to bring the topic closer to the students!
We want you! So... I am posting this message also as a plea for help. Do you think you can participate here? Were you among the local organizers of the anti-SOPA movement? Do you have some insight on TPP you can share? Do you have some gear to film+encode the talks? (As they will surely be interesting!) Or is the topic just interesting to you? Well, please come and join us! Some more informative links... BE THERE! So, again: Friday, 2013-05-03, 10:00-15:00. [Update] So, 2013-05-03 came and went. And thankfully, Alfredo was there to record most of the talk! So, you can download the video: Gunnar Wolf, Salvador Alcántar: ¿Qué es TPP? ¿Por qué me debe preocupar? ¿Qué podemos hacer? ("What is TPP? Why should I care? What can we do?")
Attachments: poster (design by Gacela, thanks!; 457.49 KB), infographic about TPP distributed by the EFF (392.45 KB), and "Here, let me sign this for you" (image by Colin Beardon; 27.08 KB).

14 April 2013

Andreas Metzler: balance sheet snowboarding season 2012/13

All in all a below average season. Although we had lots of snow in December, my first day on-piste was December 22nd; riding in a snowstorm or thick fog is just not my kind of thing. The Christmas holiday season was absurdly warm, getting rid of most of the snow again. I managed 7 snow days from December 22nd to January 1st, but this was more sport than fun. Really sunny days were rare the whole winter. Due to minor injury and minor illness I had to take long breaks from snowboarding (just two snow days in the period from January 8th to February 15th!). An early Easter cut the season short. This year I ended up in skiline.cc's top-100 list both for most meters of altitude in a single day and for the whole season, which shows that other people had a short season, too. On the plus side, we had enough good snow. This is also evident from the balance sheet below: I almost always went to Diedamskopf, where there is almost no artificial snow. Here is the balance sheet:
                          2005/06 2006/07 2007/08 2008/09 2009/10 2010/11 2011/12 2012/13
number of (partial) days       25      17      29      37      30      30      25      23
Damüls                         10      10       5      10      16      23      10       4
Diedamskopf                    15       4      24      23      13       4      14      19
Warth/Schröcken                 0       3       0       4       1       3       1       0
total meters of altitude   124634   74096  219936  226774  202089  203918  228588  203562
highscore                  10247m   8321m  12108m  11272m  11888m  10976m  13076m  13885m
# of runs                     309     189     503     551     462     449     516     468

8 April 2013

Matthias Klumpp: Tanglu status report

Hello everyone! I am very excited to report on the awesome progress we made with Tanglu, the new Debian-based Linux distribution. :) First of all, some off-topic info: I don't feel comfortable with posting too much Tanglu stuff to Planet KDE, as this is often not KDE-related, so in future Planet KDE won't get Tanglu information unless it is KDE-related ;-) You might want to take a look at Planet Tanglu for (much) more information.

So, what happened during the last weeks? Because I haven't had lectures, I worked nearly full-time on Tanglu, setting up most of the infrastructure we need. (This will change next week, when I have lectures again and also have work to do on other projects, not just Tanglu ^^) Also, we already have an awesome community of translators, designers and developers. Thanks to them, the Tanglu website is now translated into 6 languages; more are in the pipeline and will be merged later. A new website based on the Django framework is also in progress.

The logo contest

We ran a logo contest to find a new and official Tanglu logo, as the previous logo draft was too close to the Debian logo (I asked the trademark people at Debian). More than 30 valid votes (you had to be subscribed to a Tanglu mailing list) were received for 7 logo proposals, and we now have a final logo. I like it very much :)

Fun with dak

I decided to use dak, the Debian Archive Kit, to handle the Tanglu archive. Choosing dak over smaller and easier-to-use solutions had multiple reasons, the main one being that dak is way more flexible than the smaller solutions (like reprepro or mini-dak) and able to handle the large archive of Tanglu. Also, dak is lightning fast. And I would have been involved with dak sooner or later anyway, because I will implement the DEP-11 extension to the Debian archive later (making the archive application-friendly).

Working with dak is not exactly fun, though. The documentation is not that awesome, and dak contains much hardcoded stuff for Debian, e.g. it often expects the unstable suite to be present. Also, running dak on Debian Wheezy turned out to be a problem, as the Python module apt_pkg changed its API and dak naturally had problems with that. But with the great help of some Debian ftpmasters (many thanks for that!), dak is now working for Tanglu, managing the whole archive. There are still some quirks which need to be fixed, but the archive is in a usable state, accepting and integrating packages. The work on dak is also great for Debian: I resolved many issues with non-Debian dak installations and made many parts of dak Wheezy-proof. I also added a few functions which might be useful for Debian itself. All patches have of course been submitted to upstream dak ;-)

Wanna-build and buildds

This is also nearly finished :) Wanna-build, the software which manages all buildds for an archive, is a bit complicated to use. I still have some issues with it, but it does its job so far. (I need to talk to the Debian wanna-build admins for help; e.g. wanna-build seems to be unable to handle arch:all-only packages, and build logs are only submitted in parts.) The status of Tanglu builds can be viewed on the provisional buildd status pages. Setting up a working buildd is also a tricky thing: it involves patching sbuild to escape bug #696841 and applying various workarounds to make the buildd work and upload packages correctly. I will write instructions on how to set up and maintain a buildd soon.

At the moment we have only one i386 buildd up and running, but more servers (in numbers: 3) are prepared and need to be turned into buildds. After working on wanna-build and dak, I fully understand why Canonical developed Launchpad and its Soyuz module for Ubuntu. But I think we might be able to achieve something similar using just the tools Debian already uses (maybe a little less comfortable than Launchpad, but setting up our own Launchpad instance would have been much more trouble).

Debian archive import

The import of packages from the Debian archive has finished. Importing the archive raised many issues and some odd findings (I didn't know that there are packages in the archive which haven't received an upload since 2004!), but it has finally finished, and the archive is in a consistent state at this time. To have a continuous package import from Debian while a distribution is in development, we need some changes to wanna-build, which will hopefully be possible.

Online package search

The online package search is (after resolving many issues; who expected that? :P ) up and running. You can search for any package there. Some issues remain, e.g. the file contents listing doesn't work and changelog support is broken, but the basic functionality is there.

Tanglu bugtracker

We now also have a bugtracker, based on the Trac software. The Tanglu bugtracker is automatically synced with the Tanglu archive, meaning that you find all packages in Trac to report bugs against them; dak will automatically update new packages every day. Trac still needs a few comfort adjustments, e.g. submitting replies via email or tracking package versions.

Tanglu base system

The Tanglu metapackages have been published in a first alpha version. We will support GNOME 3 and KDE 4, as long as this is possible (= enough people working on the packaging). The Tanglu packages will also depend on systemd, which we will need in GNOME anyway and which also allows some great new features in KDE. A side effect of using systemd is, at least for the start, that Tanglu boots a bit slowly, because we haven't done any systemd adjustments and because the packaged systemd is very old. We will have to wait for the systemd and udev maintainers to merge the packages and release a new version first before this improves. (I don't want to do this downstream in Tanglu, because I don't know the plans for that at Debian; I only know the information Tollef Fog Heen & Co. provided at FOSDEM.)

The community

The community really surprised me! We got an incredible amount of great feedback on Tanglu, and most people liked the idea of Tanglu. I think we are one of the least-flamed new distributions ever started ;-) Also, without the very active community, kickstarting Tanglu would not have been possible. My guess was that we might be able to have something running next year; now, with the community's help, I see a chance for a release in October :) The only thing people complained about was the name of the distribution. And to be really honest, I am not too happy with the name either. But finding a name was an incredibly difficult process (finding something all parties liked), and Tanglu was a good compromise. Tanglu has absolutely no meaning; it was taken because it sounded interesting. The name was created by combining the Brazilian Tangerine (Clementine) and the German Iglu (igloo). I also don't think the name matters that much, and I am more interested in the system itself than in its name.

Also, companies produce a lot of incredibly weird names; Tanglu is relatively harmless compared to that ;-) In general, thanks to everyone participating in Tanglu! You are driving the project forward!

The first (planned) release

I hereby announce the name of the first Tanglu release, 1.1 "Aequorea Victoria". It is Daniel's fault that Tanglu releases will be named after jellyfish; you can ask him why if you want ;-) I picked Aequorea because this kind of jellyfish was particularly important for research in molecular biology: GFP, a green fluorescent protein, caused a small revolution in science and resulted in a Nobel Prize in 2008 for the researchers involved in GFP research. (For the interested: you can tag proteins with GFP and determine their position using light microscopy. GFP also made many other fancy new lab methods possible.) Because Tanglu itself is more or less experimental at this time, I found the connection to research just right for the very first release. We don't have a date yet for when this version will be officially released, but I expect it to be in October, if the development speed increases a little and more developers get interested and work on it.

Project policy

We will need to formalize the Tanglu project policy soon, both the technical and the social policies. In general, regarding free software and technical aspects, we strictly adhere to the Debian Free Software Guidelines, the Debian Social Contract and the Debian Policy. Some extra stuff will be written later; please be patient!

Tanglu OIN membership

I was approached by the Open Invention Network to join it as a member. In general, I don't have objections to doing that, because it will benefit Tanglu. However, the OIN has a very tolerant public stance on software patents, which I don't like that much; Debian did not join the OIN for this reason. For Tanglu, I think we could still join the OIN without anyone thinking that we support that stance on software patents. Joining would simply be pragmatic: we support the OIN as a way to protect the Linux ecosystem from software patents, even if we don't like its stance on software patents and see the issue differently. Because this affects the whole Tanglu project, I don't want to decide this alone, but to get some feedback from the Tanglu community before making a decision.

Can I install Tanglu now?

Yes and no. We don't provide installation images yet, so trying Tanglu is a difficult task (you need to install Debian and then upgrade it to Tanglu). If you want to experiment with it, I recommend trying Tanglu in a VM.

I want to help!

Great! Then please catch one of us on IRC or subscribe to the mailing lists. The best thing is not to ask for work, but to suggest something you want to do; others will then tell you if that is possible and maybe help with the task. Packages can for now only be uploaded by Debian Developers, Ubuntu Developers or Debian Maintainers who have contacted me directly and whose keys have been verified. This will be changed later, but at the current state of the Tanglu archive (= fewer safety checks for packages), I only want people to upload stuff who definitely have the knowledge to create sane packages (you can also prove that otherwise, of course). We will establish a new-member process later. If you want to provide a Tanglu archive mirror, we would be very happy, so that the main server doesn't have to carry all the load.

If you have experience in creating Linux live CDs or have worked with the Ubiquity installer, helping with these parts would be awesome! Unfortunately, we cannot reuse parts of Linux Mint Debian, because many of their packages don't build from source and are repackaged binaries, which is a no-go for the Tanglu main archive.

Sneak peek

And here is a screenshot of the very first Tanglu installation (currently more Debian than Tanglu):

Something else

I have been involved in Debian for a very long time now, first as a Debian Maintainer and then as a Debian Developer, and I never thought much about the work the Debian system administrators do. I didn't know how dak worked, or how wanna-build handles the buildds, or what exactly the ftpmasters have to do. By "not knowing", I mean I knew the basic theory and what these people do. But that is something different from experiencing how much work setting up and maintaining the infrastructure is, and what an awesome job these people do for Debian, keeping it all up, running and secure! Kudos for that, to all people maintaining Debian infrastructure! You rock! (And I will never ever complain about slow buildds or packages which stay in NEW for too long ;-) )

22 March 2013

Tollef Fog Heen: Sharing an SSH key, securely

Update: This isn't actually that much better than letting them access the private key, since nothing is stopping the user from running their own SSH agent, which can be run under strace. A better solution is in the works. Thanks Timo Juhani Lindfors and Bob Proulx for both pointing this out. At work, we have a shared SSH key between the different people manning the support queue. So far, this has just been a file in a directory where everybody could read it and people would sudo to the support user and then run SSH. This has bugged me a fair bit, since there was nothing stopping a person from making a copy of the key onto their laptop, except policy. Thanks to a tip, I got around to implementing this and figured writing up how to do it would be useful. First, you need a directory readable by root only, I use /var/local/support-ssh here. The other bits you need are a small sudo snippet and a profile.d script. My sudo snippet looks like:
Defaults!/usr/bin/ssh-add env_keep += "SSH_AUTH_SOCK"
%support ALL=(root)  NOPASSWD: /usr/bin/ssh-add /var/local/support-ssh/id_rsa
Everybody in group support can run ssh-add as root. The profile.d goes in /etc/profile.d/support.sh and looks like:
if [ -n "$(groups | grep -E "(^| )support( |$)")" ]; then
    export SSH_AUTH_ENV="$HOME/.ssh/agent-env"
    if [ -f "$SSH_AUTH_ENV" ]; then
        . "$SSH_AUTH_ENV"
    fi
    ssh-add -l >/dev/null 2>&1
    if [ $? = 2 ]; then
        mkdir -p "$HOME/.ssh"
        rm -f "$SSH_AUTH_ENV"
        ssh-agent > "$SSH_AUTH_ENV"
        . "$SSH_AUTH_ENV"
    fi
    sudo ssh-add /var/local/support-ssh/id_rsa
fi
The key is unavailable to the user in question because ssh-agent is setgid and so runs with group ssh, and such a process is only debuggable by root. The only things missing are that there's no way to have the agent prompt before using a key, and I would like it to die, or at least unload keys, when the last session for a user is closed; but that doesn't seem trivial to do.

18 March 2013

Daniel Pocock: Switzerland Glacier Express

With many people coming to visit this year, both for DebConf13 and holidays, I'm going to start sharing some Swiss travel tips here in the blog. One of Switzerland's most popular tourist promotions is the Glacier Express railway. Here are the facts, and a cool video.

Avoid tickets/itineraries with fixed dates for visiting the mountains. The full Glacier Express journey is about 6 hours, and if it is cloudy or foggy, it is 6 hours in a train just like any other train. Some days really are cloudy and you won't see a single mountain. Try to make sure you have several flexible days in Switzerland so you can go up the mountains when the weather is optimal.

Check the weather using the live webcam services. A typical webcam image for a bad day looks nothing like the video further down (I was up there Saturday skiing and it was perfect; Sunday was every tourist's nightmare). You can find a live feed like this for most Swiss mountain destinations with a web search.

Regular passenger trains run on the same route, so you don't need to buy an expensive Glacier Express ticket. A regular passenger train does the same route every hour. No fixed reservation is necessary either, and you can hop off and back on again along the way at many little stops. Oberalp Pass, at 2,000 meters, is a good place to stop and hike (up to the peaks if you are keen, down to Andermatt if you want an easy walk with breathtaking views). What's more, the expensive trains are fully sealed and you can't open the windows. The regular trains do let you open the windows, so you can fully immerse yourself in the experience, like this ride down from Nätschen to Andermatt (1,500 m):

[Video: http://danielpocock.com/sites/danielpocock.com/files/DSC_1776.mp4 (also available as WebM)]

To plan an itinerary, you can check Swiss train timetables at the official web site, http://www.sbb.ch/en

24 February 2013

Andrew Pollock: [tech] On owning a Nissan Leaf

I'll soon be disposing of the Nissan Leaf that we leased a few months ago, so I thought it a useful exercise to write about my experiences with it. I am not a car man. I am a gadget man. For me, driving is a means to an end, and I'm much more interested in what I call the "cabin" experience than the "driving" experience, so this is going to be slanted much more that way.

That said, I found the Nissan Leaf a fun car to drive, from my limited experiences of having fun driving cars. I liked how responsive it was when you put your foot down. It has two modes of operation, "D" and "Eco". I've actually taken to driving it in "Eco" mode most of the time, as it squeezes more range out of the batteries, but occasionally I'll pop it back into "D" to have a bit of fun. The main difference between the two modes, from a driving perspective, is that "Eco" seems to limit the responsiveness of the accelerator. In "Eco" mode it feels more like you're driving in molasses, whereas in "D" mode, when you put your foot down, it responds instantly. "D" is great for dragging people off at the lights. It's a very zippy little car in "D" mode. It feels lighter. I've noticed that it has a bit of a penchant for oversteering. Or maybe that's just my driving. If I have floored it a bit to take a right turn into oncoming traffic, I've noticed slight oversteering. That's about all the driving type "real car stuff" I'll talk about.

Now to the driver's "cabin experience". It's absolutely fabulous. I love sitting in the driver's seat of this car. Firstly, the seat itself is heated (in fact all of them are). As is the steering wheel. Nissan has gone to great lengths to allow you to avoid needing to run the car's heating system to heat the car, as doing so immediately drops at least 10 miles off the range. Unfortunately I found the windscreen had a tendency to fog up in the winter rainy periods, so I'd have to intermittently fire up the air conditioning to defog the windscreen. Of course, in the summer months you're going to want to run the AC to cool down, so the range hit in that situation is unavoidable. I've only had this car from late autumn until late winter so far, so that hasn't been an issue I've had to contend with.

The dashboard is all digital, and looks relatively high tech, which appeals to my inner geek. It's a dashboard. It tells you stuff. The stuff you'd expect it to tell you. Enough said.

The audio system is nice. It supports Bluetooth audio, so one can be streaming Pandora from one's phone through the sound system, for example. Or listening to audio from the phone. There's also a USB port, and it will play various audio files from USB storage. I found the way it chose to sort the files on a USB stick slightly surprising, though. I haven't invested the time to figure out how I should be naming files so that they play in the order I expect them to play. The ability to play from USB storage compensates nicely for the fact that it only has a single CD player. (We have a 6 disc stacker in our 2006 Prius.)

The car also came with a 3 month free trial of Sirius XM satellite radio. This was fun. The only dance music FM station in the Bay Area has a weak signal in Mountain View, and I hate dance music with static, whereas there was an excellent electronic music station that I could listen to in glorious high definition. As long as I wasn't driving under a bridge. There's no way I'd pay money for satellite radio, but it was a fun gimmick to try out.

The navigation system is really, really good.
I haven't bothered using Google Maps on my phone at all. It gives such good spoken directions that you don't even need to have the map on the screen. It names all the streets. I couldn't figure out a way to get distances in metric, though.

The telematics service, Carwings, is probably my favourite feature. This is what really makes it feel like a car of the future. Through the companion Android application, I can view the charging status (if it's plugged in) or just check the available range (if it's not plugged in). From a web browser, I can plan a route and push the route to the vehicle's navigation system. If the car is plugged in, I can also remotely turn on the vehicle's climate control system, pre-warming or cooling the car.

It's a little thing, but the door unlocking annoyed me a little bit. I'm used to the Prius, where if you unlock the boot (that's trunk, Americans) or the front passenger door, all the doors unlock. This was a convenient way of unlocking the car for multiple people as you approached it. With the Leaf, unlocking the boot only unlocks the boot. Unlocking the front passenger door only unlocks that door. It requires a second unlock action to unlock all the doors. I've found this to be slightly cumbersome when trying to unlock the car for everyone all at once, quickly (like when it's raining).

Another minor annoyance is the headlights. I've gotten into the habit of driving with the headlights on all the time, because I believe it's safer. In the Prius, one could just leave the headlights switched to the "on" position, and they'd turn off whenever the driver's door was opened after the car was switched off. If you try that in the Leaf, the car beeps at you to warn you that you've left the headlights on. It has an "auto" mode, where the car will choose when to turn the headlights on based on ambient light conditions. In that case, when you turn the car off, it'll leave the headlights on for a configurable period of time and then turn them off. This is actually slightly unsettling, because it makes you think you've left your headlights on. The default timeout was quite long as well, something like 60 seconds.

The way multiple Bluetooth phones are handled is just as annoying in the Leaf as it is in the Prius, which disappoints me, given 6 years have passed. The way I'd like to see multiple phones handled is that the car pairs with whichever phone is in range, or, if multiple phones are in range, it asks, or uses the last one it paired with. In reality, it tries to pair with whatever it paired with last time, and one has to press far too many buttons to switch it to one of the other phones it knows about.

Range anxiety is definitely something of a concern. It can be managed by using the GPS navigation for long or otherwise anxiety-inducing trips: one can compare the "miles remaining" on the GPS with the "miles remaining" on the battery range, and reassure oneself that one will indeed make it to where one is trying to go. The worst case I had was getting to within 5 miles of empty. The car started complaining at me.

The charging infrastructure in the Bay Area is pretty good. There are plenty of charging stations in San Francisco and San Jose. I'm spoiled in that I have free charging available at work (including a building at the end of my street, so I never bothered with getting a home charger installed). I've almost never had to pay for charging, so it's been great while gas prices have been on the rise.
The car's navigation system knows about some charging stations, so you can plan a route with the charging stations it knows about in mind. The only problem is it doesn't know if the charging stations are in use. If you use the ChargePoint Android app, you can see if the charging stations are in use, but then you have to do this cumbersome dance to find an available charging station and plug the address into the vehicle's navigation system. Of course, what can then happen is in the time you're driving to the charging station, someone else starts using it. I actually got bitten by this yesterday. Would I buy a Leaf again? Not as my sole car. It makes a perfect second/commuter car, but as a primary vehicle, it's too limited by its range. They're also ridiculously expensive in Australia, and Brisbane has absolutely no charging infrastructure.

22 February 2013

Richard Hartmann: Finland I

Finland

Helsinki, Lahti, streets

Arriving at Helsinki airport, we filed a claim with Lufthansa as a hard-shell suitcase had a splintered corner. We were surprised that so many Finns arrived from Munich with skis; more on that later. We picked up our car and started on our way towards Koli; driving with a top speed of 100 km/h and often being limited to 80 km/h or even 60 km/h is... unusual... Finnish police/authorities seem to be obsessed with enforcing those speed limits, as there are a lot of speed cameras along the way. Finnish people seem to be similarly obsessed with slot machines; there is an incredible number of them at gas stations and a constant stream of people playing them. From an outsider's perspective, it's weird that a country so strict about one form of addiction, alcohol, working against it vigorously by means of taxes, would allow another form of addiction, gambling, to run as freely and thus allow so many slot machines. Speaking of taxes on alcohol: a single 0.33 l bottle of beer is more expensive in a Finnish supermarket than 0.5 l of beer in a German restaurant. Which also explains why supermarkets tend to have a rather large section of relatively cheap alcohol-free beer. Anyway, coming back to streets: highway intersections don't have continuous on/off ramps from which you change from one highway to another; you drive off the highway, stop at a traffic light, and then continue onto the other highway. A weird system, but given the amount of traffic we witnessed, it's probably Good Enough (tm). Stopping for a short time in Lahti, simply because it's apparently famous for winter sports competitions, we arrived at Future Freetime in Koli national park after about five to six gruelling hours of net driving through somewhat bad weather and behind slow drivers.

Koli

Hiking up to Ukko-Koli and its sister peaks proved to be rather exhausting, as we kept breaking through the snow cover to our knees and sometimes even our hips. Once we were up there, we realized that even though you couldn't see it in between the trees, there was fog all over the plains, so we couldn't see anything. Still, it was a nice hike, even if somewhat short. Note to self: even when a trail is marked locally, if OpenStreetMap does not know about it... don't walk along it. Especially not when the going's rough already. And if there's a sign suggesting you wear snow shoes... wear snow shoes. Returning to Koli Hotel and the museum next to it, we walked over to the ski slope. The highest peak within Koli, Ukko-Koli, is 347 meters high; the local ski slope starts a good way below that. This would explain why a lot of Finns came back from Munich with their skis... Afterwards, we rented a snow mobile, without guide or supervision, and drove from Loma-Koli over lake Pielinen towards Purnuniemi and in a large circle down towards lake Ryynäskylä, where we turned around and went back the same way. If we thought Finnish streets don't have a lot of signs, we soon realized that snow mobile tracks have even fewer. There are at most two or three signs pointing you in the right direction, but on the plus side, there are no posted speed limits for snow mobiles, either. In somewhat related news, snow mobiles can go at least 95 km/h. At that point, the scratched and dirty visor of your rental helmet will keep slamming down, forcing you to take one hand off the handle and thus stop accelerating to maintain stability. To round off the day, we heated up the sauna built into our little wooden hut.
Running outside three times to rub myself off with snow from head to toes, I almost slipped and fell while standing still. When your feet are too hot for the snowy ground, you'll start to melt your own little pools of slippery water/snow mush within seconds. File that one under "I would never have guessed unless I had experienced it myself". Generic The MarkDown source of this blog post is not even 5 kiB in size; even in a worst case scenario, pushing this to my ikiwiki instance via git will eat up less 10 kiB of mobile data. Which is good because I have 78 MiB of international data left on this plan. This is also the reason why there are no links in this blog post: I am writing everything off line and don't want to search for the correct URLs to link to. I really wish EU regulators would start to tackle data roaming now that SMS and voice calls are being forced down into somewhat sane pricing regions by regulations. PS:
-rw-r--r-- 1 richih richih 4.6K Feb 11 22:55 11-Finland-I.mdwn
[...]
Writing objects: 100% (7/7), 2.79 KiB, done.

15 February 2013

Josselin Mouette: The DPL game

Following the DPL game call for players, here are my nominations for the fantastic four (in alphabetical order):
  1. Luca Falavigna
  2. Tollef Fog Heen
  3. Yves-Alexis Perez
  4. Christian Perrier
These are four randomly selected people among those who share an understanding of the Debian community, a sense of leadership, and the ability to drive people forward with solutions.

13 February 2013

Jan Wagner: Searching painting: Strawberry and Man on Pumps with Sparkling wine

We have been searching for a motif for a painting, or a painting itself, for quite a while now. It should find its place in our living room. Unfortunately, we didn't find one that matched both our expectations and/or was compatible with the rest of our living room. Yesterday we stumbled upon a motif which was quite nice, but it was too small, and it was neither possible to get it in a bigger size nor to find out who the original painter of the picture was. Now we are searching for the name of the picture and/or the painter. Any hints appreciated at 'blog - at - waja - dot - info'. A photo with higher resolution can be found here. Update: Okay... an unknown person (many thanks) hinted to me that Google image search is the tool that could be very useful. Google revealed that the painter is Inna Panasenko. P.S. Is it noticeable that I'm in vacation mode? ;)

29 January 2013

Tollef Fog Heen: Abusing sbuild for fun and profit

Over the last couple of weeks, I have been working on getting binary packages for Varnish modules built. In the current version, you need a built, unpacked source tree to build a module against. This is being fixed in the next version, but until then, I needed to provide such a tree in the build environment somehow. RPMs were surprisingly easy, since our RPM build setup is much simpler and doesn't use mock/mach or other chroot-based tools: just make a source RPM available and unpack + compile that. Debian packages, on the other hand, were not easy to get going. My first problem was just getting the Varnish source package into the chroot. I ended up making a directory in /var/lib/sbuild/build, which is exposed as /build once sbuild runs. The other hard part was getting Varnish itself built. sbuild exposes two hooks that could work: a pre-build hook and a chroot-setup hook. Neither worked: pre-build is called before the chroot is set up, so we can't build Varnish there; chroot-setup is run before the build-dependencies are installed, and it runs as the user invoking sbuild, so it can't install packages. Sparc32 and similar architectures use the linux32 tool to set the personality before building packages. I ended up abusing this mechanism: I set HOME to a temporary directory where I create a .sbuildrc which sets $build_env_cmnd to a script which in turn unpacks the Varnish source, builds it, and then chains to dpkg-buildpackage. Of course, the build-dependencies for modules don't include all the build-dependencies for Varnish itself, so I have to extract those from the Varnish source package too. No source is available at this point, mostly because it's beyond ugly. I'll see if I can get it cleaned up.
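To make the trick concrete, here is a minimal sketch of what such a setup could look like. Tollef has not published his actual code, so the file names, the wrapper script, and the exact build commands below are my guesses; only the HOME / .sbuildrc / $build_env_cmnd mechanism itself is taken from the description above.

The throw-away .sbuildrc (sbuild configuration is Perl) in the temporary HOME:

# ~/.sbuildrc in the temporary $HOME that sbuild is invoked with.
# $build_env_cmnd is prefixed to the in-chroot build command, the same
# mechanism linux32 uses to switch the personality on sparc32.
$build_env_cmnd = '/build/varnish-prep';
1;

And /build/varnish-prep (a hypothetical name), the wrapper it points to:

#!/bin/sh
# Unpack and build the Varnish source package that was dropped into
# /var/lib/sbuild/build (visible as /build inside the chroot), then
# chain to the real build command that sbuild passed as arguments.
# Assumes Varnish's own build-dependencies are already installed,
# e.g. after extracting them from the source package as described above.
set -e
cd /build
dpkg-source -x varnish_*.dsc varnish-source
( cd varnish-source && dpkg-buildpackage -us -uc -b )
exec "$@"

Invoking sbuild with HOME pointed at the directory holding that .sbuildrc is what makes it pick up the override without touching the system-wide configuration.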

28 January 2013

Tollef Fog Heen: FOSDEM talk: systemd in Debian

Michael Biebl and I are giving a talk on systemd in Debian at FOSDEM on Sunday morning at 10. We'll be talking a bit about the current state in Wheezy, what our plans for Jessie are, and what Debian packagers should be aware of. We would love to get input from people about what systemd in Jessie should look like, so if you have any ideas, opinions or insights, please come along. If you're just curious, you are of course also welcome to join.
